From Traditional Code to Generative AI Applications
The landscape of software development is undergoing a fundamental shift. We are moving from rigid, command-driven programming to flexible, natural language-driven Generative AI interaction.
1. Breaking the Command Chain
What it is: Traditional applications rely on fixed Graphical User Interfaces (GUIs) or specific, language-dependent command sets. If a user deviates from the expected input, the system fails.
Why it matters: Generative AI applications offer unprecedented flexibility. They allow users to interact using natural language to achieve complex goals, adapting to intent rather than just syntax.
2. The Principle of Non-determinism
What it is: In traditional code, $1 + 1$ always equals $2$. It is deterministic. Large Language Models (LLMs), conversely, operate on probabilities.
How it works: They can produce different results for the exact same prompt. This variety is managed through specific parameters, most notably Temperature.
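The idea can be illustrated with a toy sketch (not a real LLM): the "model" samples the next word from a probability distribution, so repeated calls with the identical prompt can return different text. The word list and probabilities here are hypothetical.

```python
import random

# Hypothetical next-word distribution for the prompt "The sky is".
NEXT_WORD_PROBS = {"blue": 0.6, "grey": 0.3, "falling": 0.1}

def toy_generate(prompt: str) -> str:
    # Sample one continuation according to its probability weight.
    words = list(NEXT_WORD_PROBS)
    weights = list(NEXT_WORD_PROBS.values())
    next_word = random.choices(words, weights=weights, k=1)[0]
    return f"{prompt} {next_word}"

# Calling this twice with the same prompt may yield different sentences.
print(toy_generate("The sky is"))
print(toy_generate("The sky is"))
```

Real LLMs do the same thing at every token position, just over a vocabulary of tens of thousands of tokens.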
3. Building Blocks: Tokens & Temperature
- Tokens: The basic numerical "building blocks" of text used by the model. Words are broken down into these sub-word units.
- Temperature: A setting (ranging from $0.0$ to $1.0$) that controls randomness. Low values yield predictable, focused text; high values encourage creative, diverse outputs.
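The effect of Temperature can be sketched in plain Python: the model's raw scores (logits) are divided by the temperature before being turned into probabilities, so low values sharpen the distribution and high values flatten it. The three logit values below are made up for illustration.

```python
import math

def apply_temperature(logits, temperature):
    """Convert raw scores into sampling probabilities at a given temperature."""
    if temperature <= 0:
        # Temperature 0 is treated as greedy decoding: always pick the top score.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [score / temperature for score in logits]
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]               # hypothetical scores for three tokens
low = apply_temperature(logits, 0.1)   # sharply peaked -> predictable output
high = apply_temperature(logits, 1.0)  # flatter -> more diverse output
```

At temperature 0.1 the top token gets almost all of the probability mass; at 1.0 the lower-scored tokens keep a realistic chance of being sampled.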
4. Practical Application
- Accuracy: When you need factual, consistent answers (for example, definitions), set Temperature to 0.0 or 0.1. This minimizes randomness and ensures the model provides the most likely, consistent responses rather than creative or hallucinated ones.
- Security: Move the API_KEY out of the main code file into an environment variable or a hidden .env file, and retrieve it securely at runtime with os.getenv("AZURE_OPENAI_KEY"). Keeping secrets out of source code protects your AI resources from unauthorized access.
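A minimal sketch of this pattern, using only the standard library (in a real project the key would come from your shell or a .env file, e.g. loaded with the python-dotenv package; the placeholder value below exists only to keep the example self-contained):

```python
import os

# Simulate the environment a shell or .env loader would provide.
# Never commit a real key; this placeholder is for illustration only.
os.environ.setdefault("AZURE_OPENAI_KEY", "sk-example-not-a-real-key")

api_key = os.getenv("AZURE_OPENAI_KEY")
if api_key is None:
    # Fail fast with a clear message instead of sending an empty key.
    raise RuntimeError("AZURE_OPENAI_KEY is not set")

# Pass api_key to your client constructor rather than hard-coding it.
```

Failing fast when the variable is missing gives a clearer error than letting an unauthenticated request reach the API.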